8 research outputs found

    On the Feasibility of Utilizing Commercial 4G LTE Systems for Mission-Critical IoT Applications

    Emerging Internet of Things (IoT) applications and services, including e-healthcare, intelligent transportation systems, the smart grid, and smart homes, cities, and workplaces, are poised to become part of every aspect of our daily lives. The IoT will enable billions of sensors, actuators, and smart devices to be interconnected and managed remotely via the Internet. Cellular-based Machine-to-Machine (M2M) communication is one of the key IoT enabling technologies, with huge market potential for cellular service providers deploying Long Term Evolution (LTE) networks. There is an emerging consensus that Fourth Generation (4G) and 5G cellular technologies will enable and support these applications, as they will provide global mobile connectivity to the anticipated tens of billions of things/devices attached to the Internet. Many vital utilities and service industries are considering commercially available LTE cellular networks to provide critical connections to users, sensors, and smart M2M devices, owing to their low cost and availability.

    Many of these emerging IoT applications are mission-critical, with stringent requirements in terms of reliability and end-to-end (E2E) delay bound. The delay bound specified for each application refers to the device-to-device latency, defined as the combined delay resulting from application-level processing time and communication latency. Each IoT application has its own distinct performance requirements in terms of latency, availability, and reliability. Typically, the uplink (UL) traffic of these IoT applications dominates the network traffic (it is much higher than the total downlink (DL) traffic). Thus, efficient LTE UL scheduling algorithms at the base station (the Evolved NodeB (eNB) in 3GPP standards) are especially critical for M2M applications.

    LTE, however, was not originally intended for IoT applications: traffic generated by M2M devices (running IoT applications) has totally different characteristics from traditional Human-to-Human (H2H) voice/video and data communications. In addition, given the anticipated massive deployment of M2M devices and the limited available radio spectrum, efficient radio resource management (RRM) and UL scheduling pose a serious challenge to adopting LTE for M2M communications. Existing LTE quality of service (QoS) standards and UL scheduling algorithms were mainly optimized for H2H services and cannot accommodate the wide range of diverging performance requirements of M2M-based IoT applications. Though 4G LTE networks can support a very low Packet Loss Ratio (PLR) at the physical layer, such reliability comes at the expense of latency inflated from tens to hundreds of milliseconds by the aggressive use of retransmission mechanisms. Current 4G LTE technologies may satisfy a single performance metric of these mission-critical applications, but not the simultaneous support of ultra-high reliability, low latency, and high data rates.

    Numerous QoS-aware LTE UL scheduling algorithms for supporting M2M applications as well as H2H services have been reported in the literature. Most of these algorithms, however, were not intended for mission-critical IoT applications, as they are not latency-aware. In addition, these algorithms are simplified and do not fully conform to LTE's signaling and QoS standards. For instance, a common practice is to assume that the time-domain UL scheduler located at the eNB prioritizes user equipment (UE)/M2M device connection requests based on the head-of-line (HOL) packet waiting time at the UE/device transmission buffer. However, the LTE standard does not support a mechanism that enables UEs/devices to inform the eNB uplink scheduler of the waiting time of uplink packets residing in their transmission buffers (see the illustrative sketch following this abstract).

    The Ultra-Reliable Low-Latency Communication (URLLC) paradigm has recently emerged to enable a new range of mission-critical applications and services, including industrial automation, real-time operation and control of the smart grid, and inter-vehicular communications for improved safety and self-driving vehicles. URLLC is one of the most innovative 5G New Radio (NR) features. URLLC and its supporting 5G NR technologies might become a commercial reality in the future, but perhaps only a rather distant one; deploying viable mission-critical IoT applications would then have to be postponed until URLLC and 5G NR technologies are commercially feasible. Because IoT applications, especially mission-critical ones, will have a significant impact on the welfare of all humanity, their immediate or near-term deployment is of utmost importance.

    It is the purpose of this thesis to explore whether current commercial 4G LTE cellular networks have the potential to support some of the emerging mission-critical IoT applications. The smart grid is selected as an illustrative IoT example because it is one of the most demanding IoT applications: its use cases range from mission-critical applications with stringent E2E latency and reliability requirements to those requiring support of a massive number of connected M2M devices with relaxed latency and reliability requirements. The contribution of the thesis is twofold. First, a user-friendly MATLAB-based open-source software package to model commercial 4G LTE systems is developed. In contrast to mainstream commercial LTE software packages, the developed package is specifically tailored to accurately model mission-critical IoT applications and, above all, fully conforms to commercial 4G LTE signaling and QoS standards. Second, utilizing the developed software package, we present a detailed, realistic LTE UL performance analysis to assess the feasibility of commercial 4G LTE cellular networks for supporting such a diverse set of emerging IoT applications as well as typical H2H services.
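    To make the scheduling gap above concrete, the following minimal Python sketch (illustrative only; the field names, numbers, and the BSR-age heuristic are our assumptions, not the thesis's model or any 3GPP-defined interface) contrasts an idealized HOL-delay-aware uplink priority metric with what a standards-conformant eNB can actually compute from Buffer Status Reports, which carry queue size rather than packet waiting time.

        from dataclasses import dataclass

        @dataclass
        class UplinkQueueState:
            device_id: int
            bsr_bytes: int          # queue size from the latest Buffer Status Report
            ms_since_bsr: float     # time elapsed since that BSR arrived at the eNB
            hol_delay_ms: float     # true head-of-line waiting time (known to the UE only)
            delay_budget_ms: float  # per-bearer packet delay budget

        def idealized_priority(q: UplinkQueueState) -> float:
            # Literature assumption: the eNB knows the HOL delay and ranks devices
            # by how close the oldest packet is to its delay budget.
            return q.hol_delay_ms / q.delay_budget_ms

        def conformant_priority(q: UplinkQueueState) -> float:
            # LTE signaling exposes only the BSR, so a conformant scheduler can at
            # best use the BSR age as a crude lower bound on the HOL waiting time.
            est_hol_ms = q.ms_since_bsr if q.bsr_bytes > 0 else 0.0
            return est_hol_ms / q.delay_budget_ms

        queues = [
            UplinkQueueState(1, bsr_bytes=300,  ms_since_bsr=2.0, hol_delay_ms=40.0, delay_budget_ms=50.0),
            UplinkQueueState(2, bsr_bytes=1200, ms_since_bsr=9.0, hol_delay_ms=10.0, delay_budget_ms=100.0),
        ]
        grant = lambda metric: max(queues, key=metric).device_id
        print("idealized grants device", grant(idealized_priority))    # -> 1 (closest to its budget)
        print("conformant grants device", grant(conformant_priority))  # -> 2 (misses the urgent packet)

    With these numbers the two policies grant different devices: the conformant scheduler, blind to the true HOL delay, passes over the packet that is closest to violating its delay budget, which is exactly why HOL-based algorithms from the literature cannot be realized over standard LTE signaling.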

    Protective efficacy of catalytic bioscavenger, paraoxonase 1 against sarin and soman exposure in guinea pigs

    Human paraoxonase 1 (PON1) has been portrayed as a catalytic bioscavenger that can hydrolyze large amounts of chemical warfare nerve agents (CWNAs) and organophosphate (OP) pesticides, compared to stoichiometric bioscavengers such as butyrylcholinesterase. We evaluated the protective efficacy of purified human and rabbit serum PON1 against the nerve agents sarin and soman in guinea pigs. Catalytically active PON1 purified from human and rabbit serum was intravenously injected into guinea pigs, which were exposed 30 min later to 1.2 × LCt50 sarin or soman using a microinstillation inhalation exposure technology. Pre-treatment with 5 units of purified human or rabbit serum PON1 produced a mild to moderate increase in blood PON1 activity but significantly increased the survival rate, with reduced symptoms of CWNA exposure. Although PON1 is expected to act catalytically, sarin and soman exposure resulted in a significant reduction in blood PON1 activity. However, blood levels of PON1 in pre-treated animals after nerve agent exposure were higher than those of untreated control animals. The activity of blood acetylcholinesterase and butyrylcholinesterase and of brain acetylcholinesterase was significantly higher in PON1 pre-treated animals and was highly correlated with the survival rate. Blood O2 saturation, pulse rate, and respiratory dynamics were normalized in animals treated with PON1 compared to controls. These results demonstrate that purified human and rabbit serum PON1 significantly protects against sarin and soman exposure in guinea pigs and support the development of PON1 as a catalytic bioscavenger for protection against lethal exposure to CWNAs.

    Recombinant paraoxonase 1 protects against sarin and soman toxicity following microinstillation inhalation exposure in guinea pigs

    To explore the efficacy of paraoxonase 1 (PON1) as a catalytic bioscavenger, we evaluated human recombinant PON1 (rePON1) expressed in Trichoplusia ni larvae against sarin and soman toxicity using microinstillation inhalation exposure in guinea pigs. Animals were pretreated intravenously with catalytically active rePON1, followed by exposure to 1.2 × LCt50 sarin or soman. Administration of 5 units of rePON1 produced a mild increase in the blood activity of the enzyme after 30 min but protected the animals, with a significant increase in survival rate and minimal signs of nerve agent toxicity. Pretreatment with rePON1 prevented the reduction in blood O2 saturation and pulse rate otherwise observed after sarin or soman exposure. In addition, rePON1-pretreated animals showed significantly higher blood PON1, acetylcholinesterase (AChE), and butyrylcholinesterase activity after nerve agent exposure compared to the respective untreated controls. AChE activity in different brain regions of rePON1-pretreated animals exposed to sarin or soman was also significantly higher than in the respective controls. The remaining activity of blood PON1, cholinesterases, and brain AChE in PON1-pretreated animals after nerve agent exposure correlated with the survival rate. In summary, these data suggest that human rePON1 protects against sarin and soman exposure in guinea pigs.

    Factors influencing unmet need for family planning among Ghanaian married/union women: a multinomial mixed effects logistic regression modelling approach

    Background: Unmet need for family planning is high (30%) in Ghana. Reducing unmet need for family planning will reduce the high levels of unintended pregnancies, unsafe abortions, and maternal and neonatal morbidity and mortality. The purpose of this study was to examine factors associated with unmet need for family planning, to help scale up the uptake of family planning services in Ghana.
    Methods: This cross-sectional descriptive and inferential study involved secondary analysis of data on women of reproductive age (15–49 years) from the 2014 Ghana Demographic and Health Survey. The outcome variable was unmet need for family planning, categorized into three levels: no unmet need, unmet need for limiting, and unmet need for spacing. The chi-squared test statistic and bivariate multilevel multinomial mixed-effects logistic regression models were used to identify significant variables, and all variables significant in the bivariate analysis (p < 0.05) were then included in the multivariable multilevel multinomial mixed-effects logistic regression model via a model-building approach (a simplified sketch of this modelling approach follows the abstract).
    Results: Women who fear contraceptive side effects were about 2.94 (95% CI: 2.28–3.80) and 2.58 (95% CI: 2.05–3.24) times more likely to have an unmet need for limiting and for spacing, respectively, compared to those who do not fear side effects. Respondents' age was a highly significant predictor of unmet need for family planning: the predictive probability of unmet need for limiting was much higher in the 45–49 year group (0.86) than in the 15–19 year group (0.02), while the marginal predictive probability for spacing fell from 0.74 to 0.04 as age changed from 15–19 to 45–49 years. Infrequent sexual intercourse, opposition from partners, socio-economic factors (wealth index, respondent's educational level, respondent's and partner's occupation), and cultural factors (religion and ethnicity) were all significant determinants of unmet need for both limiting and spacing.
    Conclusions: This study reveals that fear of side effects, infrequent sex, age, ethnicity, partner's education, and region were the most highly significant predictors of unmet need for both limiting and spacing. These factors must be considered in efforts to meet the unmet need for family planning.
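    The following simplified Python sketch conveys the flavour of the modelling approach on synthetic data (all variable names, coefficients, and values are invented for illustration and are not the GDHS 2014 data). A full multilevel multinomial mixed-effects model with random intercepts for survey clusters requires specialized tooling; as a stand-in, the sketch fits a plain multinomial logit with statsmodels and derives marginal predicted probabilities by age group, mirroring the kind of probabilities reported above.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({
            "age_group": rng.integers(0, 7, n),           # 0 = 15-19 ... 6 = 45-49
            "fears_side_effects": rng.integers(0, 2, n),  # 1 = fears contraceptive side effects
        })

        # Outcome: 0 = no unmet need, 1 = unmet need for spacing, 2 = unmet need for limiting.
        # Synthetic log-odds chosen so spacing falls and limiting rises with age.
        logit_space = 1.0 - 0.6 * df["age_group"] + 0.9 * df["fears_side_effects"]
        logit_limit = -3.0 + 0.6 * df["age_group"] + 1.0 * df["fears_side_effects"]
        p = np.exp(np.column_stack([np.zeros(n), logit_space, logit_limit]))
        p /= p.sum(axis=1, keepdims=True)
        df["unmet_need"] = [rng.choice(3, p=row) for row in p]

        # Multinomial logit (no random effects) as a simplified stand-in
        X = sm.add_constant(df[["age_group", "fears_side_effects"]])
        fit = sm.MNLogit(df["unmet_need"], X).fit(disp=False)

        # Marginal predicted probabilities of each outcome, averaged within age groups
        probs = pd.DataFrame(np.asarray(fit.predict(X)),
                             columns=["none", "spacing", "limiting"])
        print(probs.groupby(df["age_group"]).mean().round(2))

    The per-age-group averages of the fitted probabilities play the role of the marginal predictive probabilities in the Results paragraph; with real DHS data, the cluster structure would additionally be modelled with random intercepts.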